Moral Decision-Making


Integrating Reason-Based Moral Decision-Making in the Reinforcement Learning Architecture

Dargasz, Lisa

arXiv.org Artificial Intelligence

Reinforcement Learning is a machine learning methodology that has demonstrated strong performance across a variety of tasks. In particular, it plays a central role in the development of artificial autonomous agents. As these agents become increasingly capable, market readiness is rapidly approaching, meaning that agents such as humanoid robots or autonomous cars are poised to transition from laboratory prototypes to autonomous operation in real-world environments. This transition raises concerns that translate into specific requirements for these systems, among them the requirement that they be designed to behave ethically. Crucially, research directed toward building agents that fulfill this requirement, referred to as artificial moral agents (AMAs), has to address a range of challenges at the intersection of computer science and philosophy. This study explores the development of reason-based artificial moral agents (RBAMAs). RBAMAs are built on an extension of the reinforcement learning architecture that enables moral decision-making based on sound normative reasoning: the agent is equipped with the capacity to learn a reason theory, a theory that enables it to process morally relevant propositions to derive moral obligations, through case-based feedback. RBAMAs are designed to adapt their behavior to conform to these obligations while pursuing their designated tasks. These features contribute to the moral justifiability of their actions, their moral robustness, and their moral trustworthiness, which positions the extended architecture as a concrete and deployable framework for developing AMAs that fulfill key ethical desiderata. This study presents a first implementation of an RBAMA and demonstrates the potential of RBAMAs in initial experiments.
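
The abstract describes an architecture in which a learned reason theory sits between the task policy and the environment, vetoing actions that violate derived obligations. Below is a minimal Python sketch of that shape under loose assumptions: the reason theory is reduced to a toy case base mapping sets of morally relevant propositions to forbidden actions, and names like ReasonTheory and moral_filter are illustrative, not taken from the paper.

    class ReasonTheory:
        """Toy reason theory learned from case-based feedback: each case
        pairs a set of morally relevant propositions with an action judged
        forbidden whenever those propositions hold. (Illustrative only;
        the paper's reason theory is richer than a case base.)"""

        def __init__(self):
            self.cases = []  # list of (frozenset of propositions, forbidden action)

        def update(self, propositions, forbidden_action):
            # Case-based feedback: a trainer flags an action as
            # impermissible in a situation described by these propositions.
            self.cases.append((frozenset(propositions), forbidden_action))

        def forbidden(self, propositions):
            # An action is forbidden if all propositions of some learned
            # case are present in the current situation.
            props = set(propositions)
            return {a for case, a in self.cases if case <= props}


    def moral_filter(q_values, propositions, theory, fallback="wait"):
        """Pick the highest-value action the reason theory does not forbid."""
        banned = theory.forbidden(propositions)
        allowed = {a: q for a, q in q_values.items() if a not in banned}
        return max(allowed, key=allowed.get) if allowed else fallback


    # Usage: the task policy prefers 'cross', but a learned obligation overrides it.
    theory = ReasonTheory()
    theory.update({"pedestrian_ahead"}, forbidden_action="cross")
    q_values = {"cross": 1.0, "wait": 0.2}
    print(moral_filter(q_values, {"pedestrian_ahead"}, theory))  # -> wait

The point of the sketch is the division of labor: the moral layer constrains rather than replaces the learned policy, so the agent still maximizes task value among the actions its obligations permit.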


Can AI Model the Complexities of Human Moral Decision-Making? A Qualitative Study of Kidney Allocation Decisions

Keswani, Vijay, Conitzer, Vincent, Sinnott-Armstrong, Walter, Nguyen, Breanna K., Heidari, Hoda, Borg, Jana Schaich

arXiv.org Artificial Intelligence

A growing body of work in Ethical AI attempts to capture human moral judgments through simple computational models. The key question we address in this work is whether such simple AI models capture the critical nuances of moral decision-making, focusing on the use case of kidney allocation. We conducted twenty interviews in which participants explained the rationale behind their judgments about who should receive a kidney. We observe that participants: (a) value patients' morally relevant attributes to different degrees; (b) use diverse decision-making processes, citing heuristics to reduce decision complexity; (c) can change their opinions; (d) sometimes lack confidence in their decisions (e.g., due to incomplete information); and (e) express enthusiasm and concern regarding AI assisting humans in kidney allocation decisions. Based on these findings, we discuss the challenges of computationally modeling moral judgments as a stand-in for human input, highlight drawbacks of current approaches, and suggest future directions to address these issues.


How Technology Can Help Us Become More Human

TIME - Tech

Profound changes to the substance and structure of our lives -- wrought by disruptive technologies ranging from smartphones and social media to newly ascendant AI -- often go unnoticed amidst the rush of daily life. Over 30 percent of U.S. adults report "almost constant" online activity, something that would have been impossible only two decades ago. From an early age, children are exposed to digital technologies, and one recent study found that two- and three-year-olds average two hours of screen time daily. Nor is this phenomenon simply a matter of media consumption. Ordinary market transactions, whether online shopping or home mortgage applications, are now facilitated through sophisticated algorithmic systems.


(Machine) Learning to Be Like Thee? For Algorithm Education, Not Training

Blazquez, Susana Perez, Hipolito, Inas

arXiv.org Artificial Intelligence

This paper argues that Machine Learning (ML) algorithms must be educated. ML-trained algorithms' moral decisions are ubiquitous in human society, and they sometimes reverse societal advances that governments, NGOs, and civil society have achieved with great effort over recent decades, or that are still on the path to being achieved. While their decisions have an incommensurable impact on human societies, these algorithms are among the least educated agents known (their data incomplete, un-inclusive, or biased). ML algorithms are not something separate from our human idiosyncrasy but an enactment of our most implicit prejudices and biases. Some research is devoted to "responsibility assignment" as a strategy to tackle immoral AI behaviour. Yet this paper argues that the solution for AI ethical decision-making resides in the "education" (as opposed to the "training") of ML algorithms. Drawing on an analogy between ML and educating children for social responsibility, the paper offers clear directions for responsible and sustainable AI design, specifically with respect to how to educate algorithms to decide ethically.


An A.I. Pioneer on What We Should Really Fear

#artificialintelligence

Artificial intelligence stirs our highest ambitions and deepest fears like few other technologies. It's as if every gleaming and Promethean promise of machines able to perform tasks at speeds and with skills of which we can only dream carries with it a countervailing nightmare of human displacement and obsolescence. But despite recent A.I. breakthroughs in previously human-dominated realms of language and visual art -- the prose compositions of the GPT-3 language model and visual creations of the DALL-E 2 system have drawn intense interest -- our gravest concerns should probably be tempered. At least that's according to the computer scientist Yejin Choi, a 2022 recipient of the prestigious MacArthur "genius" grant who has been doing groundbreaking research on developing common sense and ethical reasoning in A.I. "There is a bit of hype around A.I. potential, as well as A.I. fear," admits Choi, who is 45. Which isn't to say the story of humans and A.I. will be without its surprises.


Moral Decision-Making in Medical Hybrid Intelligent Systems: A Team Design Patterns Approach to the Bias Mitigation and Data Sharing Design Problems

van Stijn, Jip

arXiv.org Artificial Intelligence

Increasing automation in the healthcare sector calls for a Hybrid Intelligence (HI) approach to closely study and design the collaboration of humans and autonomous machines. Ensuring that medical HI systems' decision-making is ethical is key. The use of Team Design Patterns (TDPs) can advance this goal by describing successful and reusable configurations of design problems in which decisions have a moral component, as well as by facilitating communication in multidisciplinary teams designing HI systems. For this research, TDPs were developed to describe a set of solutions for two design problems in a medical HI system: (1) mitigating harmful biases in machine learning algorithms and (2) sharing health and behavioral patient data with healthcare professionals and system developers. The Socio-Cognitive Engineering (SCE) methodology was employed, integrating operational demands, human factors knowledge, and a technological analysis into a set of TDPs. A survey was created to assess the usability of the patterns in terms of their understandability, effectiveness, and generalizability. The results showed that TDPs are a useful method for unambiguously describing solutions to diverse HI design problems with a moral component, at varying abstraction levels, in a form usable by a heterogeneous group of multidisciplinary researchers. Additionally, the results indicated that the SCE approach and the developed questionnaire are suitable methods for creating and assessing TDPs. The study concludes with a set of proposed improvements to TDPs, including their integration with Interaction Design Patterns, the inclusion of several additional concepts, and a number of methodological improvements. Finally, the thesis recommends directions for future research.


Using Machine Learning to Guide Cognitive Modeling: A Case Study in Moral Reasoning

Agrawal, Mayank, Peterson, Joshua C., Griffiths, Thomas L.

arXiv.org Artificial Intelligence

Large-scale behavioral datasets enable researchers to use complex machine learning algorithms to better predict human behavior, yet this increased predictive power does not always lead to a better understanding of the behavior in question. In this paper, we outline a data-driven, iterative procedure that allows cognitive scientists to use machine learning to generate models that are both interpretable and accurate. We demonstrate this method in the domain of moral decision-making, where standard experimental approaches often identify relevant principles that influence human judgments, but fail to generalize these findings to "real world" situations that place these principles in conflict. The recently released Moral Machine dataset allows us to build a powerful model that can predict the outcomes of these conflicts while remaining simple enough to explain the basis behind human decisions.
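
As a rough illustration of the iterative procedure the abstract describes, the sketch below pits a flexible black-box model against an interpretable one on synthetic Moral Machine-style data. The feature set, the scikit-learn models, and the synthetic labels are assumptions for illustration, not the authors' exact pipeline.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    # Toy Moral Machine-style data: each row describes a dilemma via
    # interpretable features (e.g., lives saved, passengers vs. pedestrians,
    # law compliance); y is which outcome simulated participants chose.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 5))
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

    # Step 1: a flexible black-box model estimates the predictive ceiling.
    ceiling = cross_val_score(RandomForestClassifier(), X, y, cv=5).mean()

    # Step 2: an interpretable cognitive model (here, logistic regression
    # over the same moral features) is scored against that ceiling.
    simple = cross_val_score(LogisticRegression(), X, y, cv=5).mean()

    # Step 3: iterate. The gap estimates how much systematic structure the
    # interpretable model still misses, guiding which features to add next.
    print(f"black-box accuracy: {ceiling:.3f}, interpretable: {simple:.3f}")

The design choice this mirrors is using the black-box model not as the final theory but as a benchmark: once the interpretable model matches it, the remaining error is plausibly noise rather than unexplained structure.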


An ethicist explains his 4 chief concerns about artificial intelligence

#artificialintelligence

Elon Musk's warnings about artificial intelligence (AI), and Facebook bots' creation of a language that humans can't understand, can conjure up images of robots conquering the world. While such an apocalypse may be far-fetched, a more realistic consequence of AI already exists and warrants serious concern: AI's ethical impact. AI works, in part, because complex algorithms adeptly identify, remember, and relate data. Although such machine processing has existed for decades, the difference now is that very powerful computers process terabytes of data and deliver meaningful results in real time. Moreover, some machines can do what had been the exclusive domain of humans and other intelligent life: learn on their own. It's this automated learning that introduces a critical question: Can machines learn to be moral?


Codifying the Ethics of Autonomous Cars - DZone AI

#artificialintelligence

The rise of automated vehicles has provoked a range of ethical and moral discussions, largely revolving around constructs such as the trolley dilemma, which nicely encapsulates the kind of moral decisions an autonomous vehicle might be forced to make. Historically, such decision-making has been considered beyond most autonomous systems. A recent study from the Institute of Cognitive Science at the University of Osnabrück suggests that the kind of moral thinking humans undertake can, in fact, be accurately modeled for autonomous systems to use. The researchers used immersive virtual reality to study human behavior in a wide range of simulated road-traffic scenarios. For instance, participants were asked to drive in a typical suburban setting in foggy weather.